PARALLEL AND DISTRIBUTED COMPUTATION AT STANFORD
COURSES AND RESEARCH
1991-1992
Name of contact persons: The CS and EE departments have more than a
dozen substantial research projects explicitly targeting parallel and
distributed computing. For information on these, contact the director
of the area closest to your interests.
Scientific Computing: Gene Golub
Systems: John Hennessy
AI: Jean-Claude Latombe
Theory: Vaughan Pratt
Address for all contact persons:
Department of Computer Science
Stanford University,
Stanford, CA 94305
E-mail Address for all contact persons:
<surname>@cs.stanford.edu
Telephone numbers:
Golub: 415-723-3125
Hennessy: 415-725-3712
Latombe: 415-723-3432
Pratt: 415-723-2943
COMPUTER ACCESS
Within CS and CSL, mainly workstations and parallel computers:
Workstations: people working on parallel computing have access
to an estimated one to two hundred workstations, mostly DEC
3100's, Sun-4's, and older machines. Most of these are
configured in clusters around file servers, often just another
workstation, though there are some larger servers as well,
including three Sun-4/490's that also provide other services
such as timesharing and X window service. Counting machines
used by others in the department brings the total to over four
hundred computers.
Parallel machines: an Alliant, a Sequent Multimax, a Cydrome,
an 8-processor SGI IRIS 380, six SGI 4-processor IRIS 340's, an
SGI 2-processor IRIS 320 VGX (million polygons/sec), 7 DEC
5-processor Fireflies, a 16-processor IPSC-2, and a
32-processor IPSC 860. Under construction is the DASH machine,
the prototype of which has 64 MIPS-3000 processors for an
expected 1600 MIPS and 1/4 Gigabyte RAM.
Elsewhere on campus:
Network access to 6000 registered IP addresses on campus of
which 1400 are Unix machines and the rest are gateways, PC's,
Macintoshes, VAX/VMS's, etc. Parallel machines in other
departments include Convexes, Pyramids, and a Connection
Machine. A major university facility is a 6-processor IBM
ES/9000 (an upgraded 3090 600J).
DEGREES OFFERED
BS, MS, Ph.D.
CURRICULUM
We offer 15 courses explicitly about parallel and distributed
computation, and others that have a substantial parallel computation
component. Parallel and distributed computation is the main or a major
research interest of more than 20 of our faculty.
Remarks relevant to courses offered:
Listed below are those of our courses that are explicitly about
parallel and distributed computation, and those research interests of
the faculty that bear on parallel and distributed computation.
CS140. Concurrent Programming--Principles of concurrent programming,
including processes, mutual exclusion and synchronization,
message-passing and monitors. Emphasis on principles and algorithms,
rather than on implementation.
Prerequisites: 107 and 110. 3 units, Spr (Lam)
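The mutual-exclusion and monitor ideas listed above can be sketched in a few lines of Python; the bounded counter, its limit, and the thread counts below are invented for illustration, showing only how a lock plus a condition variable keeps concurrent updates consistent.

```python
import threading

class BoundedCounter:
    """Monitor-style shared counter: one lock guards the state and a
    condition variable lets threads wait until an increment is allowed,
    combining mutual exclusion and synchronization in one object."""
    def __init__(self, limit):
        self.limit = limit
        self.value = 0
        self._lock = threading.Lock()
        self._can_inc = threading.Condition(self._lock)

    def increment(self):
        with self._can_inc:                  # enter the monitor
            while self.value >= self.limit:  # wait rather than busy-spin
                self._can_inc.wait()
            self.value += 1

    def reset(self):
        with self._can_inc:
            self.value = 0
            self._can_inc.notify_all()       # wake any blocked threads

counter = BoundedCounter(limit=10_000)
threads = [threading.Thread(
               target=lambda: [counter.increment() for _ in range(1000)])
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 8000: no updates lost under contention
```

Without the lock, the read-modify-write in increment() could interleave across threads and lose updates; the monitor discipline makes the final count deterministic.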
EE384/CS244. Computer Networks: Architectures and
Protocols---Objectives of computer networks; network structure and
components; switching techniques (circuit switching and packet
switching); network functions; layered network architectures (the ISO
reference model); data link protocols (character-oriented protocols,
bit-oriented protocols, error checking, window flow control, and
multi-access protocols); network control (datagrams, virtual circuits,
routing, and congestion control); transport and session protocols
(end-to-end communication, interconnection of networks). Examples and
standard protocols are cited for point-to-point, satellite, packet
radio, and local area networks. 3 units, Aut (Cheriton)
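The window flow control mentioned above can be sketched as a toy simulation; the frame numbering, acknowledgment order, and logging below are illustrative assumptions rather than any standard protocol.

```python
from collections import deque

def sliding_window_send(frames, window=4):
    """Simulate send/ack rounds under window flow control: at most
    `window` frames may be outstanding (sent but not yet acknowledged)
    at any moment, and each acknowledgment slides the window forward."""
    outstanding = deque()
    log = []
    it = iter(frames)
    nxt = next(it, None)
    while nxt is not None or outstanding:
        # Sender fills the window.
        while nxt is not None and len(outstanding) < window:
            outstanding.append(nxt)
            log.append(("send", nxt))
            nxt = next(it, None)
        # Receiver acknowledges the oldest outstanding frame.
        log.append(("ack", outstanding.popleft()))
    return log

log = sliding_window_send(range(6), window=3)
peak = in_flight = 0
for kind, _ in log:
    in_flight += 1 if kind == "send" else -1
    peak = max(peak, in_flight)
print(peak)  # 3: the window bound was never exceeded
```

The check at the end confirms the defining property: the number of frames in flight never exceeds the window size, which is how the receiver bounds the buffering it must provide.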
CS315A. Parallel Computer Architecture and Programming---Design and
programming of architectures. Survey of different programming models;
study of research and commercial parallel machines designed to support
the shared-memory, message-passing, dataflow, systolic, and
data-parallel paradigms. Interleaved with architectural studies are
lectures on techniques for programming parallel computers.
Implementation trade-offs dealing with synchronization granularity,
communication, data access patterns, and load balancing using case
studies from real applications. Integral programming assignments are
done on one or more commercial multiprocessors.
Prerequisites: 140, 212, and reasonable programming experience.
3 units, Win (Gupta)
CS315B. Parallel Programming Project--Continuation of 315A. A significant
parallel programming project is required. A shared-memory
multiprocessor, and possibly a message-passing machine and a Connection
Machine, are available for the projects. Lectures on parallel
programming languages and their implementation, performance debugging of
parallel programs, and parallel data structures and algorithms. Guest
speakers on issues in parallel programming. Prerequisite: 315A or
consent of instructor.
3 units, Spr (Gupta)
CS340. Distributed Systems---Overview of distributed systems, primarily
as an extension of uniprocessor operating systems to span networks.
Presents the impact of networking on each of the subsystems and issues
discussed in 240A,B, including basic architectural models;
network-transparent message-passing and remote procedure call;
network-wide virtual memory; distributed file systems; encryption, and
multi-site concurrency control, replication, and error recovery.
Prerequisites: 240B and 244. 3 units, Spr (Cheriton)
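The network-transparent remote procedure call mentioned above can be sketched in-process; the method table and queue transport below are hypothetical stand-ins for the marshalling, naming, and failure handling a real distributed system needs.

```python
import threading
import queue

requests = queue.Queue()

def server():
    # Hypothetical service table; real RPC adds marshalling and naming.
    table = {"add": lambda a, b: a + b, "upper": str.upper}
    while True:
        method, args, reply = requests.get()
        if method == "stop":
            break
        reply.put(table[method](*args))

def rpc(method, *args):
    """Client stub: send (method, args, reply queue) and block on the
    reply, hiding message passing behind an ordinary procedure call."""
    reply = queue.Queue()
    requests.put((method, args, reply))
    return reply.get()

t = threading.Thread(target=server)
t.start()
r1 = rpc("add", 2, 3)
r2 = rpc("upper", "dash")
print(r1, r2)  # 5 DASH
requests.put(("stop", (), None))
t.join()
```

The client never sees the message exchange; that hiding of communication behind a call interface is the core of the RPC abstraction the course examines.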
CS341. Distributed Systems Project--Companion project option for students
taking 340.
Corequisite: 340. 3 units, Spr (Cheriton)
CS343. Topics in Compilers---Focus is on compilers for parallel
architectures. Lectures/discussions explore program analysis techniques
and code optimizations for a variety of parallel machines, including
superscalars, distributed-memory machines, and multiprocessors. A
significant project is included. Prerequisite: 243. 3-6 units, Win
(Lam)
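One analysis such compilers perform is data-dependence testing; below is a simplified sketch of the classic GCD test, where the subscript form a[c*i + k] is an assumption chosen only for illustration.

```python
from math import gcd

def may_depend(c1, k1, c2, k2):
    """GCD dependence test: can a write of a[c1*i + k1] and a read of
    a[c2*j + k2] ever touch the same element for integer i, j?
    The equation c1*i - c2*j = k2 - k1 has an integer solution iff
    gcd(c1, c2) divides k2 - k1; if it does not, the loop carries no
    such dependence and its iterations can safely run in parallel."""
    return (k2 - k1) % gcd(c1, c2) == 0

# for i: a[2*i] = ... a[2*i + 1] ...   (writes even, reads odd slots)
print(may_depend(2, 0, 2, 1))   # False: parallelizable
# for i: a[2*i] = ... a[2*i + 4] ...
print(may_depend(2, 0, 2, 4))   # True: a dependence may exist
```

The test is conservative in one direction only: a False answer proves independence, while a True answer merely fails to rule a dependence out.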
EE484/CS344. Computer Networks: Modeling and Analysis---Network
functions, architectures and protocols; computer traffic
characterization; resource sharing; packet-switched store-and-forward
networks (e.g., ARPAnet): delay analysis, network design and
optimization including capacity assignment, routing and topological
design; multi-access/broadcast protocols (used in packet-switched
satellite, ground radio, and local networks): fixed assignment, random
access, demand assignment, adaptive strategies, stability
considerations and dynamic control. Recommended: knowledge of 244.
Prerequisite: 265. 3 units, Spr (Tobagi)
CS347. Distributed Databases---For students with some background in
database systems, emphasizing principles and system organization of
distributed databases. Overview of distributed databases. Levels of
distribution transparency. Data fragmentation and allocation.
Distributed database design. Equivalence transformations for queries.
Query translation. Query optimization. Theory and applications of
semijoins. Properties and reliability of transactions in distributed
databases. The 2-phase commitment protocol. Concurrency control based
on locking and based on timestamps. Nonblocking commit protocols.
Reliability of distributed databases. Review of commercial systems and
research prototypes.
Prerequisite: 245. Recommended: 145 and a familiarity with computer
networks. 3 units, Sum/Aut (Ceri)
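The semijoin mentioned above reduces one relation by the join attribute of another before data is shipped between sites; a small sketch, with hypothetical tables, is below.

```python
def semijoin(r, s, attr):
    """R semijoin S on `attr`: the tuples of R that join with some tuple
    of S. In a distributed database, shipping only the projection of S
    on the join attribute, rather than all of S, cuts the data moved
    between sites before the full join is computed."""
    s_keys = {t[attr] for t in s}       # project S on the join attribute
    return [t for t in r if t[attr] in s_keys]

# Hypothetical relations held at two different sites.
orders = [{"cust": 1, "item": "disk"},
          {"cust": 2, "item": "tape"},
          {"cust": 3, "item": "cpu"}]
customers = [{"cust": 1}, {"cust": 3}]
reduced = semijoin(orders, customers, "cust")
print(reduced)  # only the orders for customers 1 and 3 survive
```

The payoff is in the sizes: the projection of S on the join attribute is typically far smaller than S itself, so the reducer can travel cheaply to the site holding R.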
CS352. Foundations of Control Structures---Theory of
constructs for controlling program execution. Theories of parallel
control: temporal logic, CCS, CSP. Models of parallel control: state
trajectories, synchronization trees, execution traces, partial orders,
metric spaces. Notions of time: ordered, real, probabilistic. Related
soundness, completeness, and complexity issues.
Prerequisites: CS353 and CS258, or consent of instructor. 3 units,
Spr (Pratt)
CS355. Reasoning about Finite-state Concurrent Systems--Automatic
methods for analyzing finite-state systems, with emphasis on automatic
formal verification. Topics: state graph and automata models of system
behavior. Automata on infinite strings. Linear and branching-time
temporal logic. Model checking. Applications to circuits, algorithms,
and protocols. Modelling real-time systems. Analysis methods based on
Boolean formulas, and other ways of coping with the "state explosion
problem."
Prerequisite: CS154 or CS254. Recommended: a good understanding of
basic automata and complexity theory, and an undergraduate-level
background in computer science. 3 units, Win (Dill)
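The automatic, exhaustive style of analysis this course covers can be sketched as explicit-state reachability checking of a safety property; the two-process toy system below is an invented example, and real checkers fight the state explosion with far more compact representations.

```python
from collections import deque

def check_safety(initial, successors, bad):
    """Explicit-state safety checking: breadth-first search of the
    reachable state graph, reporting False if any reachable state
    satisfies `bad`. Enumerating states this way is exactly where the
    'state explosion problem' appears as state components multiply."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if bad(state):
            return False
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

# Toy mutual-exclusion system: each process cycles idle -> trying ->
# critical -> idle, but may enter "critical" only if the other is out.
def successors(state):
    step = {"idle": "trying", "trying": "critical", "critical": "idle"}
    for i in (0, 1):
        nxt = step[state[i]]
        if nxt == "critical" and state[1 - i] == "critical":
            continue          # entry guarded by the other process
        yield tuple(nxt if j == i else state[j] for j in (0, 1))

safe = check_safety(("idle", "idle"), successors,
                    bad=lambda s: s == ("critical", "critical"))
print(safe)  # True: mutual exclusion holds in every reachable state
```

Because the state space is finite, the search is a decision procedure: it either proves the property over all reachable states or halts at a concrete counterexample state.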
CS357A. Reactive Systems: The Languages--Reactive systems maintain an
on-going interaction with their environment, e.g., concurrent,
distributed, and real-time programs. Basic models of reactive systems:
generic transition system, shared variables, semaphores and other
synchronization constructs, communication constructs, asynchronous and
synchronous message passing, Petri nets. Faithful modeling of
concurrency: interleaving, fairness. Specification language of
temporal logic: temporal operators, future and past formulas, axioms
and rules, temporal and program validity. Specification of programs:
hierarchy of program properties, classes of safety, termination,
intermittence, response, persistence, and progress properties.
Prerequisite: 157 or equivalent. 3 units, Win (Manna)
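A rough sketch of how temporal-logic properties like those above can be evaluated over a finite execution trace; the trace, state shape, and property names are invented for illustration, and real temporal semantics are defined over infinite behaviors.

```python
def always(trace, p):
    """[]p over a finite trace: p holds in every state."""
    return all(p(s) for s in trace)

def responds(trace, request, grant):
    """[](request -> <>grant): every state satisfying `request` is
    followed, at or after that position, by a state satisfying `grant`."""
    return all(any(grant(t) for t in trace[i:])
               for i, s in enumerate(trace) if request(s))

# Invented trace of a shared-resource program: each state records which
# processes are waiting and which one holds the resource.
trace = [{"wait": {"p1"}, "holds": None},
         {"wait": set(),  "holds": "p1"},
         {"wait": {"p2"}, "holds": "p1"},
         {"wait": set(),  "holds": "p2"},
         {"wait": set(),  "holds": None}]

# Safety: a process never holds the resource while still marked waiting.
r_safe = always(trace, lambda s: s["holds"] not in s["wait"])
# Response: every request by p2 is eventually granted.
r_resp = responds(trace, lambda s: "p2" in s["wait"],
                  lambda s: s["holds"] == "p2")
print(r_safe, r_resp)  # True True
```

The two checks mirror the property hierarchy in the description: `always` is a safety property (nothing bad ever happens), while `responds` is a response property (something good eventually follows each request).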
CS357B. Reactive Systems: Verification and Development---Formal methods
for verification and development of reactive programs, based on
formalism of temporal logic that has been specifically developed to
reason about behaviors of reactive programs. Methodologies for formal
verification of properties of reactive programs: proving safety,
liveness, precedence, causality, response, and progress properties.
Derived development approaches. State invariants, incremental
verification, heuristics, assertion diagrams, overtaking analysis,
forward and backward analysis, history variables. Chain and
well-founded rules. Verification under assumption of fairness. Programs
with a large number of similar processes. Real-time programs.
Assertional proof methods. Case of finite-state programs and automatic
verification tools for this case. Predicate automata.
Prerequisites: 157 or equivalent, and 357A. 3 units, Spr (Manna)
CS367A. Parallel Computation--Introduction to theoretical issues in
parallel computation. Topics: Parallel machine models. Design and
analysis of algorithms on systolic arrays: arithmetic operations,
simple graph algorithms. Algorithms for hypercube-related networks:
sorting, routing. PRAM model of computation. Basic PRAM algorithms:
prefix computation, sorting, shortest paths, minimum-weight spanning
tree. Reducing the processor-time product. Simulation of stronger
PRAM models by weaker ones. Complexity issues: definition of NC and
P-completeness; some simple lower bounds.
3 units (Plotkin) alternate years, given 1991-92
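The prefix computation listed among the basic PRAM algorithms can be sketched as follows, simulating the O(log n) parallel rounds sequentially; this is a sketch of the data-parallel scan idea, not an actual PRAM implementation.

```python
def prefix_sums(xs):
    """Data-parallel prefix computation (scan): O(log n) rounds, where
    in round d every position i >= d adds in the value d places to its
    left, all positions conceptually updating at once. The rounds are
    simulated here sequentially by rebuilding the whole list."""
    a = list(xs)
    d = 1
    while d < len(a):
        a = [a[i] + (a[i - d] if i >= d else 0) for i in range(len(a))]
        d *= 2
    return a

print(prefix_sums([3, 1, 4, 1, 5, 9, 2, 6]))
# [3, 4, 8, 9, 14, 23, 25, 31]
```

With one processor per element, each round takes constant time, so n processors finish in O(log n) time; reducing the processor-time product below that naive n log n is one of the course topics.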
CS367B. Parallel Computation--Advanced parallel algorithms. Possible
topics: Parallel algorithms for maximal independent set and related
problems; parallel graph coloring. Evaluation of straight-line code,
P-complete problems. Deterministic and randomized parallel algorithms
for flows and related problems; assignment problem, matching in general
graphs.
3 units, Win (Plotkin)
CS441. Topics in Ada Programming--The Ada language is used as an
example for discussing current research in high-level languages for
programming large systems and distributed systems. Related developments
in specification languages are discussed. Part 1 (the Ada language
design and programming techniques): multi-task programming, compilation
algorithms for tasking, runtime supervisors for distributed systems in
Ada, detection of concurrency errors; comparison of Ada with other
high-level concurrent languages. Part 2: design of specification
languages related to Ada, specification, validation, and verification
methods for
multi-task programs; environments for programming with specifications.
Prerequisite: 107. 3 or 4 units, Win (Luckham)
The following two-course sequence, while not exclusively about parallel
computing, is an essential prerequisite for, and leads into, 340 and 341.
CS240A,B. Operating Systems--Two-quarter sequence in operating systems
design and implementation. 240A: Fundamentals of operating system
implementation--basic structure; multi-programming, processes, and
scheduling; synchronization and communication mechanisms; I/O device
management; memory management, segmentation, paging; file systems,
directory management, disk allocation. 240B: Deeper coverage of issues
that arise in all subsystems of an operating system; naming and I/O
protocols; protection; reliability; performance; user interfaces; and
networking.
Prerequisite for 240A: 140 or equivalent. Prerequisite for 240B: 240A.
Some other courses have substantial parallel computing components.
================================
Faculty with research interests bearing on parallel or distributed
computation, including foundational work in theory and cooperative and
multiple-agent work in AI:
David Cheriton. Design of distributed computer systems, parallel
computer systems, communication protocols. Techniques for high-speed
computer communication. Developing a multiprocessor workstation with
focus on parallel distributed operating system techniques.
David Dill. Finite-state concurrent systems, protocol and hardware
verification, asynchronous circuits, concurrency. Automatic
verification of asynchronous circuit designs.
Michael Flynn. Investigation of approaches to massively parallel
machines. Studying characteristics of parallel processors such as
limits on their performance.
Andrew Goldberg. Design and analysis of combinatorial parallel and
distributed algorithms.
Gene Golub. Numerical algorithms for parallel computation.
Anoop Gupta. Design of general-purpose scalable parallel computer
architectures. Design of interconnection networks for multiprocessors,
understanding the role of locality. Building a scalable
directory-based shared-memory (DASH) multiprocessor using very high
performance individual nodes. Multiprocessor scheduling. Design of a
concurrent object-oriented programming language.
John Hennessy. Compiler systems for multiprocessors and very high
performance architectures.
Monica Lam. Parallel systems. Languages, compilers and architectures
that cooperate to deliver performance.
Jean-Claude Latombe. Reasoning in multiple-agent worlds. Representing
one agent's knowledge about other agents' knowledge, recognizing other
agents' goals, reasoning about time-dependent interactions with other
agents and modeling multiple-agent society laws.
Marc Levoy. Parallel algorithms and architectures for image synthesis
and scientific visualization, with emphasis on real-time display of
complex scenes and data.
David Luckham. Design of prototyping languages for large distributed
time-critical systems. Specification languages for parallel programs.
Edward McCluskey. Techniques for concurrent checking of system
operation.
Teresa Meng. Parallel algorithms and low power implementation of such
algorithms. Wireless communication of video signals for small
devices.
Zohar Manna. Concurrent programming. Development of a temporal logic
for the specification and verification of concurrent programs.
Rajeev Motwani. Design and analysis of algorithms and data structures
with particular emphasis on parallelism.
Nils Nilsson. Communicating, distributed AI systems and robots.
Serge Plotkin. Design and analysis of parallel algorithms and data
structures. Theoretical issues in distributed computation.
Vaughan Pratt. Process specification languages for analog and digital
domains with computational and noncomputational applications.
David Rumelhart. Connectionist approach to AI and cognitive science.
Yoav Shoham. Agent Oriented Programming. Theoretical foundations of
ascribing mental qualities to machines. Design, implementation and
application of programming languages for agents with mental state,
languages which include communicative speech acts.
Fouad Tobagi. Telecommunications networks, computer networks, packet
radio, high speed networks and their interfaces, broadband ISDN, fast
packet switches, network protocols.
Daniel Weise. Parallel processing, techniques for parallelizing code
for multiple processors.
Gio Wiederhold. Conceptual database models for distributed databases;
object models for multi-user database query and update processing
interfaces.
Terry Winograd. Design of computer systems for cooperative work,
focusing on the social activity by which people generate the space of
cooperative actions in which they work.